124 research outputs found
Attention and awareness in human learning and decision making
This dissertation presents an investigation of the modifying role of attention and awareness in human learning and decision making. A series of experiments showed that performance in a range of tests of unconscious cognition can be better explained as resulting from conscious attention rather than from implicit processes.
The first three experiments utilised a modification of the Serial Reaction Time task in order to measure the interaction of implicit and explicit learning processes. The results did not show evidence for an interaction, but did exhibit an effect of explicit knowledge of the underlying rules of the task.
Subsequent studies examined the role of selective attention in learning. They failed to provide evidence that learning inevitably results from the simple presentation of contingent stimuli over repeated trials; instead, the learning effects appeared to be modulated by explicit attention to the association between stimuli. A further study, using a novel test designed to measure the role of selective attention in prediction learning, demonstrated that learning is not an obligatory consequence of the simultaneous activation of representations of the associated stimuli. Rather, learning occurred only when attention was drawn explicitly to the association between the stimuli.
Finally, the Deliberation without Attention Paradigm was tested in a replication study along with two novel versions of the task. Additional assessment of the conscious status of participants’ judgments indicated that explicit deliberation and memory could best explain the effect and that the original test may not be a reliable measure of intuition.
In summary, the data in these studies did not require explanation in terms of unconscious cognition. These results do not preclude the possibility that unconscious processes could occur in these or other designs. However, the present work emphasises the role conscious attention plays in human learning and decision making.
The Role of Human Fallibility in Psychological Research: A Survey of Mistakes in Data Management
Errors are an inevitable consequence of human fallibility, and researchers are no exception. Most researchers can recall major frustrations or serious time delays due to human errors while collecting, analyzing, or reporting data. The present study is an exploration of mistakes made during the data-management process in psychological research. We surveyed 488 researchers regarding the type, frequency, seriousness, and outcome of mistakes that have occurred in their research team during the last 5 years. The majority of respondents suggested that mistakes occurred with very low or low frequency. Most respondents reported that the most frequent mistakes led to insignificant or minor consequences, such as time loss or frustration. The most serious mistakes caused insignificant or minor consequences for about a third of respondents, moderate consequences for almost half of respondents, and major or extreme consequences for about one fifth of respondents. The most frequently reported types of mistakes were ambiguous naming/defining of data, version control errors, and wrong data processing/analysis. Most mistakes were reportedly due to poor project preparation or management and/or personal difficulties (physical or cognitive constraints). With these initial exploratory findings, we do not aim to provide a description representative of psychological scientists but, rather, to lay the groundwork for a systematic investigation of human fallibility in research data management and the development of solutions to reduce errors and mitigate their impact.
SampleSizePlanner: A Tool to Estimate and Justify Sample Size for Two-Group Studies
Planning sample size often requires researchers to identify a statistical technique and to make several choices during their calculations. Currently, there is a lack of clear guidelines to help researchers find and apply the appropriate procedure. In the present tutorial, we introduce a web app and R package that offer nine different procedures to determine and justify the sample size for independent two-group study designs. The application highlights the most important decision points for each procedure and suggests example justifications for them. The resulting sample-size report can serve as a template for preregistrations and manuscripts.
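To make the kind of calculation such tools automate concrete, here is a minimal sketch of one common procedure for two-group designs, an a-priori power analysis for an independent-samples t-test. This is a generic illustration, not the SampleSizePlanner API; the function name `n_per_group` and the chosen defaults (alpha = .05, power = .80) are our assumptions.

```python
import math
from scipy import stats

def n_per_group(d, alpha=0.05, power=0.80):
    """Smallest per-group n for a two-sided independent-samples t-test
    to detect a standardized effect size d with the requested power.

    Illustrative sketch: searches n upward, computing achieved power
    from the noncentral t distribution at each candidate sample size."""
    n = 2
    while True:
        df = 2 * n - 2
        t_crit = stats.t.ppf(1 - alpha / 2, df)   # two-sided critical value
        ncp = d * math.sqrt(n / 2)                # noncentrality parameter
        achieved = (1 - stats.nct.cdf(t_crit, df, ncp)
                    + stats.nct.cdf(-t_crit, df, ncp))
        if achieved >= power:
            return n
        n += 1

print(n_per_group(0.5))   # medium effect (Cohen's d = 0.5) -> 64 per group
```

With these defaults the search reproduces the textbook figures (e.g. 64 participants per group for d = 0.5), which is the kind of number a sample-size report would then justify.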
Phasic affective signals by themselves do not regulate cognitive control
Cognitive control is a set of mechanisms that help us process conflicting stimuli and maintain goal-relevant behaviour. According to the Affective Signalling Hypothesis, conflicting stimuli are aversive and thus elicit (negative) affect; to avoid such aversive signals, the affective and cognitive systems work together, increasing control and thereby driving conflict adaptation. Several studies have found that affective stimuli can indeed modulate conflict adaptation; however, there is currently no evidence that phasic affective states not triggered by conflict also improve cognitive control. To investigate this possibility, we intermixed trials of a conflict task with trials involving the passive viewing of emotional words. We tested whether affective states induced by affective words on a given trial improve cognitive control on a subsequent conflict trial. Applying Bayesian analysis, the results of four experiments supported the absence of adaptation to aversive signals, in terms of both valence and arousal. These results suggest that phasic affective states are by themselves not sufficient to elicit an increase in control.
Quantifying Support for the Null Hypothesis in Psychology: An Empirical Investigation
In the traditional statistical framework, nonsignificant results leave researchers in a state of suspended disbelief. In this study, we examined, empirically, the treatment and evidential impact of nonsignificant results. Our specific goals were twofold: to explore how psychologists interpret and communicate nonsignificant results and to assess how much these results constitute evidence in favor of the null hypothesis. First, we examined all nonsignificant findings mentioned in the abstracts of the 2015 volumes of Psychonomic Bulletin & Review, Journal of Experimental Psychology: General, and Psychological Science (N = 137). In 72% of these cases, nonsignificant results were misinterpreted, in that the authors inferred that the effect was absent. Second, a Bayes factor reanalysis revealed that fewer than 5% of the nonsignificant findings provided strong evidence (i.e., BF01 > 10) in favor of the null hypothesis over the alternative hypothesis. We recommend that researchers expand their statistical tool kit in order to correctly interpret nonsignificant results and to be able to evaluate the evidence for and against the null hypothesis.
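The idea of quantifying evidence *for* the null (BF01) can be illustrated with a rough sketch. The snippet below uses the BIC approximation to the Bayes factor (Wagenmakers, 2007) for two independent groups; this is a simplified stand-in, not the default Bayes factors used in the reanalysis, and the function name `bf01_bic` is our own.

```python
import numpy as np

def bf01_bic(x, y):
    """BIC-approximated Bayes factor in favour of the null (equal means)
    for two independent samples: BF01 ~= exp((BIC_1 - BIC_0) / 2).

    Null model: one common mean. Alternative: separate group means.
    The extra parameter penalises the alternative, so near-identical
    groups yield BF01 > 1 (evidence for the null)."""
    data = np.concatenate([x, y])
    n = data.size
    sse0 = np.sum((data - data.mean()) ** 2)                       # pooled mean
    sse1 = np.sum((x - x.mean()) ** 2) + np.sum((y - y.mean()) ** 2)  # group means
    bic0 = n * np.log(sse0 / n) + 1 * np.log(n)
    bic1 = n * np.log(sse1 / n) + 2 * np.log(n)
    return float(np.exp((bic1 - bic0) / 2))
```

On this scale, BF01 > 10 (the paper's criterion for strong evidence) means the data are ten times more likely under the null than under the alternative; a nonsignificant p-value alone guarantees no such thing.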
A Consensus-Based Transparency Checklist
We present a consensus-based checklist to improve and document the transparency of research reports in social and behavioural research. An accompanying online application allows users to complete the form and generate a report that they can submit with their manuscript or post to a public repository.
Is there evidence for cross-domain congruency sequence effect? A replication of Kan et al. (2013)
Exploring the mechanisms of cognitive control is central to understanding how we control our behaviour. These mechanisms can be studied in conflict paradigms, which require the inhibition of irrelevant responses to perform the task. It has been suggested that in these tasks the detection of conflict enhances cognitive control, resulting in improved conflict resolution on subsequent trials. If this is the case, then this so-called congruency sequence effect can be expected to occur in cross-domain tasks. Previous research on the domain-generality of the effect has yielded inconsistent results. In this study, we provide a multi-site replication of three experiments of Kan et al. (Kan IP, Teubner-Rhodes S, Drummey AB, Nutile L, Krupa L, Novick JM 2013 Cognition 129, 637-651), which tested the congruency sequence effect between very different domains: from a syntactic to a non-syntactic domain (Experiment 1), and from a perceptual to a verbal domain (Experiments 2 and 3). Despite all our efforts, we found only partial support for the claims of the original study. With a single exception, we could not replicate the original findings; the data remained inconclusive or went against the theoretical hypothesis. We discuss the compatibility of the results with alternative theoretical frameworks.